[GuideLLM Refactor] Fix from-file #366
Conversation
Looks good; nit on organization.
DataType = (
    Iterable[str]
    | Iterable[dict[str, Any]]
    | Dataset
    | DatasetDict
    | IterableDataset
    | IterableDatasetDict
    | str
    | Path
)

OutputFormatType = (
    tuple[str, ...]
    | list[str]
    | dict[str, str | dict[str, Any] | GenerativeBenchmarkerOutput]
    | None
)


async def initialize_backend(
    backend: BackendType | Backend,
    target: str,
    model: str | None,
    backend_kwargs: dict[str, Any] | None,
) -> Backend:
    backend = (
        Backend.create(
            backend, target=target, model=model, **(backend_kwargs or {})
        )
        if not isinstance(backend, Backend)
        else backend
    )
    await backend.process_startup()
    await backend.validate()
    return backend


async def resolve_profile(
    constraint_inputs: dict[str, int | float],
    profile: Profile | str | None,
    rate: list[float] | None,
    random_seed: int,
    constraints: dict[str, ConstraintInitializer | Any],
):
    for key, val in constraint_inputs.items():
        if val is not None:
            constraints[key] = val
    if not isinstance(profile, Profile):
        if isinstance(profile, str):
            profile = Profile.create(
                rate_type=profile,
                rate=rate,
                random_seed=random_seed,
                constraints={**constraints},
            )
        else:
            raise ValueError(f"Expected string for profile; got {type(profile)}")
    elif constraints:
        raise ValueError(
            "Constraints must be empty when providing a Profile instance. "
            f"Provided constraints: {constraints} ; provided profile: {profile}"
        )
    return profile


async def resolve_output_formats(
    output_formats: OutputFormatType,
    output_path: str | Path | None,
) -> dict[str, GenerativeBenchmarkerOutput]:
    output_formats = GenerativeBenchmarkerOutput.resolve(
        output_formats=(output_formats or {}), output_path=output_path
    )
    return output_formats


async def finalize_outputs(
    report: GenerativeBenchmarksReport,
    resolved_output_formats: dict[str, GenerativeBenchmarkerOutput],
):
    output_format_results = {}
    for key, output in resolved_output_formats.items():
        output_result = await output.finalize(report)
        output_format_results[key] = output_result
    return output_format_results
Let's move all of this to a new file (initializers.py, maybe); entrypoints.py is kind of our public interface for ABI usage, so I think it's best to only have the front-facing functions in here. It also allows for some type reuse in scenarios.py.
To bring the conversation online: I thought so too, but it appears that Mark would prefer them to be in the same file. To clarify, I added comments.
Above comment is non-blocking
Signed-off-by: Jared O'Connell <joconnel@redhat.com>
Also remove unused import
Signed-off-by: Jared O'Connell <joconnel@redhat.com>
Force-pushed from b17f644 to f926e5b
Merged ab5466b into vllm-project:features/refactor/working
Summary
This PR ports the new functionality from benchmark run to benchmark from-file, and does so in a way that reuses as much code as practical to have one source of truth.
Details
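A minimal sketch of the "one source of truth" idea: both entrypoints funnel their reports through the same output-finalization helper. ReportStub, OutputStub, and benchmark_from_file below are illustrative stand-ins, not guidellm's actual classes; only the shape of the shared finalize_outputs helper reflects the diff.

```python
# Hypothetical sketch of the shared-helper reuse pattern.
import asyncio
from dataclasses import dataclass


@dataclass
class ReportStub:
    path: str  # stands in for GenerativeBenchmarksReport


class OutputStub:
    def __init__(self, kind: str) -> None:
        self.kind = kind

    async def finalize(self, report: ReportStub) -> str:
        # The real outputs write json/yaml/console; here we just describe it.
        return f"{self.kind}:{report.path}"


async def finalize_outputs(report, resolved_output_formats):
    # Shared by both `benchmark run` and `benchmark from-file`.
    results = {}
    for key, output in resolved_output_formats.items():
        results[key] = await output.finalize(report)
    return results


async def benchmark_from_file(path: str) -> dict[str, str]:
    report = ReportStub(path)  # stands in for loading result.json/yaml
    formats = {"console": OutputStub("console")}
    return await finalize_outputs(report, formats)


print(asyncio.run(benchmark_from_file("./result.json")))
# {'console': 'console:./result.json'}
```

Because from-file reuses the same helper as benchmark run, any output format added later is available to both commands without duplicated code.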
Test Plan
Run a benchmark with an output of json or yaml, and use from-file to re-import it and export it. You can select any output type supported by benchmark run.
guidellm benchmark from-file ./result.json --output-formats console
guidellm benchmark from-file ./result.yaml --output-formats yaml
Related Issues
Use of AI
## WRITTEN BY AI ##